Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization

Authors

Abstract

This article is concerned with the multiagent optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where a non-Euclidean Bregman divergence is used. The classical gradient descent method is thereby generalized without using subgradient information of the objective functions. The proposed algorithms are the first zeroth-order methods to achieve an approximate $O(\frac{1}{\sqrt{T}})$ rate of convergence in $T$, recovering the best known optimal rate for nonsmooth constrained convex optimization. Moreover, a decentralized reciprocal weighted averaging (RWA) approximating sequence is investigated, and convergence for the RWA sequence is shown to hold over a time-varying graph. Rates of convergence are comprehensively explored for the resulting algorithm (DRGFMD-RWA). The technique for constructing the RWA sequence provides new insight into searching for minimizers in zeroth-order algorithms.
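As a rough illustration of the scheme described above, the following minimal single-agent sketch combines a two-point randomized gradient-free oracle with a mirror descent step under the negative-entropy Bregman divergence on the probability simplex. The objective f, the smoothing radius mu, the step sizes, and the omission of the multiagent consensus-averaging step are simplifying assumptions made here for illustration, not the paper's exact construction.

```python
import numpy as np

def zo_gradient(f, x, mu, rng):
    """Two-point randomized gradient-free estimate of a (sub)gradient of f at x.
    The perturbed point x + mu*u may leave the feasible set; f is assumed to be
    defined on the whole space, as is typical for zeroth-order oracles."""
    u = rng.standard_normal(x.shape)            # random search direction
    return (f(x + mu * u) - f(x)) / mu * u      # finite difference along u

def entropy_mirror_step(x, g, eta):
    """Mirror descent step with the negative-entropy Bregman divergence; on the
    probability simplex it reduces to an exponentiated-gradient update."""
    y = x * np.exp(-eta * g)
    return y / y.sum()

# Illustrative nonsmooth convex objective over the simplex, minimized at v.
v = np.array([0.7, 0.1, 0.1, 0.1])
f = lambda x: np.abs(x - v).sum()

rng = np.random.default_rng(0)
x = np.full(4, 0.25)                            # start at the uniform distribution
for t in range(1, 5001):
    g = zo_gradient(f, x, mu=1e-4, rng=rng)
    x = entropy_mirror_step(x, g, eta=0.5 / np.sqrt(t))   # diminishing step size
print(x)                                        # iterates are noisy but drift toward v
```

The multiplicative update plus renormalization is what the negative-entropy divergence buys: the simplex constraint is maintained without an explicit projection step.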


Similar Articles

Multiple-gradient Descent Algorithm for Multiobjective Optimization

The steepest-descent method is a well-known and effective single-objective descent algorithm when the gradient of the objective function is known. Here, we propose a particular generalization of this method to multi-objective optimization by considering the concurrent minimization of n smooth criteria {J_i} (i = 1, ..., n). The novel algorithm is based on the following observation: consider a...
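The observation the excerpt breaks off at is commonly formalized via the minimum-norm element of the convex hull of the individual gradients: when nonzero, its negative is a direction along which all criteria decrease. Below is a minimal sketch for two criteria, where that minimum-norm combination has a closed form; the quadratic objectives and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Minimum-norm element of the convex hull of {g1, g2}; its negative is a
    descent direction for both criteria (it is zero at Pareto-stationary points)."""
    diff = g2 - g1
    denom = diff @ diff
    if denom == 0.0:                    # identical gradients: any of them works
        return g1
    t = np.clip((g2 @ diff) / denom, 0.0, 1.0)
    return t * g1 + (1.0 - t) * g2

# Illustrative pair of smooth criteria: J1(x) = ||x - a||^2, J2(x) = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
for _ in range(200):
    w = common_descent_direction(2.0 * (x - a), 2.0 * (x - b))
    x = x - 0.1 * w                     # step along the common descent direction
print(x)                                # lands on a Pareto-stationary point between a and b
```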


A Weighted Mirror Descent Algorithm for Nonsmooth Convex Optimization Problem

Large scale nonsmooth convex optimization is a common problem for a range of computational areas including machine learning and computer vision. Problems in these areas contain special domain structures and characteristics. Special treatment of such problem domains, exploiting their structures, can significantly reduce the computational burden. We present a weighted Mirror Descent method to so...


A Fast Distributed Stochastic Gradient Descent Algorithm for Matrix Factorization

The accuracy and effectiveness of the matrix factorization technique were well demonstrated in the Netflix movie recommendation contest. Among the numerous solutions for matrix factorization, Stochastic Gradient Descent (SGD) is one of the most widely used algorithms. However, as a sequential approach, the SGD algorithm cannot be used directly in the Distributed Cluster Environment (DCE). In this paper...
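For context, the serial per-rating SGD update that such distributed schemes set out to parallelize is only a few lines. The sketch below uses synthetic ratings and illustrative rank, learning rate, and regularization, and does not show any distributed partitioning.

```python
import random
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 50, 40, 5               # illustrative sizes
P = 0.1 * rng.standard_normal((n_users, rank))   # user factor matrix
Q = 0.1 * rng.standard_normal((n_items, rank))   # item factor matrix

# Synthetic observed ratings as (user, item, value) triples.
ratings = [(int(rng.integers(n_users)), int(rng.integers(n_items)),
            float(rng.uniform(1.0, 5.0))) for _ in range(2000)]

lr, reg = 0.02, 0.05                             # illustrative hyperparameters
for epoch in range(20):
    random.shuffle(ratings)
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                    # prediction residual for one rating
        pu = P[u].copy()                         # keep the pre-update row for Q's step
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])
# Note: consecutive updates touching the same row of P or Q depend on each other,
# which is exactly what makes naive parallelization of SGD hard.
```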


Mirror-Descent-like Algorithms for Submodular Optimization

In this paper we develop a framework of submodular optimization algorithms in line with the mirror-descent style of algorithms for convex optimization. We use the fact that a submodular function has both a subdifferential and a superdifferential, which enables us to formulate algorithms for both submodular minimization and maximization. This reveals a unifying framework for a number of submodul...
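A standard way to obtain the subgradients such a framework relies on is Edmonds' greedy algorithm applied to the Lovász extension of the submodular function. The sketch below uses it inside a projected subgradient loop on the box relaxation [0,1]^n; the cut-plus-modular objective and the step-size schedule are illustrative assumptions, not the paper's specific algorithms.

```python
import numpy as np

def lovasz_subgradient(F, x):
    """Edmonds' greedy algorithm: a subgradient of the Lovász extension of a
    submodular set function F (with F(empty set) = 0) at a point x in [0,1]^n."""
    order = np.argsort(-x)               # coordinates in decreasing order of x
    s = np.zeros_like(x)
    prev, chosen = 0.0, []
    for k in order:
        chosen.append(int(k))
        cur = F(frozenset(chosen))
        s[k] = cur - prev                # marginal value along the greedy chain
        prev = cur
    return s

# Illustrative submodular objective: graph cut on a 3-node path plus a modular
# term (adding a modular function preserves submodularity); its minimizer is {2}.
edges = [(0, 1), (1, 2)]
w = np.array([1.0, 0.5, -1.5])
def F(S):
    cut = sum((a in S) != (b in S) for a, b in edges)
    return cut + sum(w[i] for i in S)

# Projected subgradient descent on the convex relaxation over [0,1]^n.
x = np.full(3, 0.5)
for t in range(1, 501):
    x = np.clip(x - lovasz_subgradient(F, x) / np.sqrt(t), 0.0, 1.0)
print(np.round(x))   # approximately the indicator vector of the minimizing set {2}
```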


Distributed Stochastic Optimization via Adaptive Stochastic Gradient Descent

Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial in many applications, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial algorithm that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic op...



Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2022

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2021.3075669